20 research outputs found

    On Attitudes Toward Spanish Varieties: A Bilingual Perspective

    This study explores the attitudes of 25 English-Spanish bilingual speakers from Tucson (Arizona) toward their own variety and compares them with their attitudes toward monolingual varieties of Mexican Spanish (from Hermosillo) and Peninsular Spanish (from Murcia and Madrid). Our analysis points to a clear influence of the standard language ideology (MILROY, 2001) in shaping these attitudes, exacerbated by a tendency among bilinguals in diglossic societies to feel insecure about their own variety as a minority language, or toward a feeling of linguistic self-hatred.

    Multi-channel MRI segmentation of eye structures and tumors using patient-specific features

    Retinoblastoma and uveal melanoma are fast-spreading eye tumors usually diagnosed using 2D fundus photography and 2D ultrasound (US). Diagnosis and treatment planning of such diseases often require additional complementary imaging to confirm the tumor extent via 3D Magnetic Resonance Imaging (MRI). In this context, automatic segmentations to estimate the size and distribution of the pathological tissue would be advantageous for tumor characterization. Until now, the alternative has been the manual delineation of eye structures, a rather time-consuming and error-prone task that must be conducted in multiple MRI sequences simultaneously. This situation, and the lack of tools for accurate eye MRI analysis, reduces the interest in MRI beyond the qualitative evaluation of optic nerve invasion and the confirmation of recurrent malignancies below calcified tumors. In this manuscript, we propose a new framework for the automatic segmentation of eye structures and ocular tumors in multi-sequence MRI. Our key contribution is the introduction of a pathological eye model from which Eye Patient-Specific Features (EPSF) can be computed. These features combine intensity and shape information of pathological tissue embedded in healthy structures of the eye. We assess our work on a dataset of pathological patient eyes by computing the Dice Similarity Coefficient (DSC) of the sclera, the cornea, the vitreous humor, the lens, and the tumor. In addition, we quantitatively show the superior performance of our pathological eye model compared to the segmentation obtained using a healthy model (over 4% DSC) and demonstrate the relevance of our EPSF, which improve the final segmentation regardless of the classifier employed.

    Oblique S and T Constraints on Electroweak Strongly-Coupled Models with a Light Higgs

    Using a general effective Lagrangian implementing the chiral symmetry breaking SU(2)_L x SU(2)_R -> SU(2)_{L+R}, we present a one-loop calculation of the oblique S and T parameters within electroweak strongly-coupled models with a light scalar. Imposing a proper ultraviolet behaviour, we determine S and T at next-to-leading order in terms of a few resonance parameters. The constraints from the global fit to electroweak precision data force the massive vector and axial-vector states to be heavy, with masses above the TeV scale, and suggest that the W+W- and ZZ couplings of the Higgs-like scalar should be close to the Standard Model value. Our findings are generic, since they rely only on soft requirements on the short-distance properties of the underlying strongly-coupled theory, which are widely satisfied in more specific scenarios.

    Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography.

    Age-related macular degeneration (AMD) is a progressive retinal disease causing vision loss. A more detailed characterization of its atrophic form became possible thanks to the introduction of Optical Coherence Tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents taking full advantage of the accurate retina depiction. In this study, we developed a fully automated algorithm segmenting Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into train and test sets. The training set was used to develop a Convolutional Neural Network (CNN). The performance of the algorithm was established by cross-validation and comparison to the test set, with ground truth annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved the model performance. The proposed model identified RORA with performance matching human experts. It has the potential to rapidly identify atrophy with high consistency.
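The Dice score reported here (and in several of the other segmentation studies listed) measures the overlap between a predicted and a reference binary mask. A minimal sketch of that metric, with illustrative function and array names not taken from the paper:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total
```

For example, masks [1,1,0] and [1,0,0] share one voxel out of three foreground voxels in total, giving a Dice score of 2/3.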

    Ocular Structures Segmentation from Multi-sequences MRI Using 3D Unet with Fully Connected CRFs

    The use of 3D Magnetic Resonance Imaging (MRI) has attracted growing attention for the diagnosis and treatment planning of intraocular cancers. Precise segmentation of such tumors is highly important to characterize them and their progression and to define a treatment plan. Along this line, automatic and effective segmentation of tumors and healthy eye anatomy would be of great value. The major challenge, however, lies in the disease variability encountered over different populations, often imaged under different acquisition conditions, and in the high heterogeneity of tumors in location, size, and appearance. In this work, we consider retinoblastoma, the most common eye cancer in children. To provide automated segmentations of relevant structures, a multi-sequence MRI dataset of 72 subjects is introduced, collected across different clinical sites with different magnetic fields (3T and 1.5T), with healthy and pathological subjects (children and adults). Using this data, we present a framework to segment both healthy and pathological eye structures. In particular, we make use of a 3D U-net CNN with four encoder and decoder layers to produce conditional probabilities of different eye structures. These are further refined using a Conditional Random Field with Gaussian kernels to maximize label agreement between similar voxels in multi-sequence MRIs. We show experimentally that our approach achieves state-of-the-art performance for several relevant eye structures and that these results are promising for use in clinical practice.

    Automated Quantification of Pathological Fluids in Neovascular Age-Related Macular Degeneration, and Its Repeatability Using Deep Learning.

    To develop a reliable algorithm for the automated identification, localization, and volume measurement of exudative manifestations in neovascular age-related macular degeneration (nAMD), including intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelium detachment (PED), using a deep-learning approach. One hundred seven spectral-domain optical coherence tomography (OCT) cube volumes were extracted from nAMD eyes. Manual annotation of IRF, SRF, and PED was performed. Ninety-two OCT volumes served as the training and validation set, and 15 OCT volumes from different patients as the test set. The performance of our fluid segmentation method was quantified by means of pixel-wise metrics and volume correlations and compared to other methods. Repeatability was tested on 42 other eyes with five OCT volume scans acquired on the same day. The fully automated algorithm achieved good performance for the detection of IRF, SRF, and PED. The area under the curve for detection, sensitivity, and specificity was 0.97, 0.95, and 0.99, respectively. The correlation coefficients for the fluid volumes were 0.99, 0.99, and 0.91, respectively. The Dice scores were 0.73, 0.67, and 0.82, respectively. For the largest volume quartiles, the Dice scores were >0.90. Including retinal layer segmentation contributed positively to the performance. The repeatability of volume prediction showed standard deviations of 4.0 nL, 3.5 nL, and 20.0 nL for IRF, SRF, and PED, respectively. The deep-learning algorithm can simultaneously achieve a high level of performance for the identification and volume measurement of IRF, SRF, and PED in nAMD, providing accurate and repeatable predictions. Including layer segmentation during training and a squeeze-and-excite block in the network architecture were shown to boost the performance. Potential applications include measurements of specific fluid compartments with high reproducibility, assistance in treatment decisions, and the diagnostic or scientific evaluation of relevant subgroups.
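The nanolitre fluid volumes above follow from counting segmented voxels and scaling by the physical voxel size (1 mm^3 = 1000 nL). A minimal sketch under assumed, illustrative voxel spacings; the actual spacing depends on the OCT device and scan protocol and is not given in the abstract:

```python
import numpy as np

def fluid_volume_nl(mask: np.ndarray, spacing_mm=(0.047, 0.012, 0.245)) -> float:
    """Volume of a binary segmentation mask in nanolitres.

    spacing_mm: per-axis voxel size in millimetres (illustrative values).
    One cubic millimetre equals one microlitre, i.e. 1000 nanolitres.
    """
    voxel_mm3 = float(np.prod(spacing_mm))       # volume of one voxel in mm^3
    n_voxels = int(mask.astype(bool).sum())      # count segmented voxels
    return n_voxels * voxel_mm3 * 1000.0         # convert mm^3 -> nL
```

With unit voxel spacing, a 10-voxel mask yields 10 mm^3, i.e. 10000 nL.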

    Automated foveal location detection on spectral-domain optical coherence tomography in geographic atrophy patients.

    To develop a fully automated algorithm for accurate detection of the fovea location in atrophic age-related macular degeneration (AMD), based on spectral-domain optical coherence tomography (SD-OCT) scans. Image processing was conducted on a cohort of patients affected by geographic atrophy (GA). SD-OCT images (cube volumes) from 55 eyes (51 patients) were extracted and processed with a layer segmentation algorithm to segment the Ganglion Cell Layer (GCL) and the Inner Plexiform Layer (IPL). Their en face thickness projection was convolved with a 2D Gaussian filter to find the global maximum, which corresponded to the detected fovea. The detection accuracy was evaluated by computing the distance between the manual annotation and the predicted location. The mean total location error was 0.101±0.145 mm; the mean error in the horizontal and vertical en face axes was 0.064±0.140 mm and 0.063±0.060 mm, respectively. The mean error for foveal and extrafoveal retinal pigment epithelium and outer retinal atrophy (RORA) was 0.096±0.070 mm and 0.107±0.212 mm, respectively. Our method obtained a significantly smaller error than the fovea localization algorithm built into the OCT device (0.313±0.283 mm, p < .001) or a method based on the thinnest central retinal thickness (0.843±1.221 mm, p < .001). Significant outliers are flagged by the method's reliability score. Despite retinal anatomical alterations related to GA, the presented algorithm was able to detect the foveal location on SD-OCT cubes with high reliability. Such an algorithm could be useful for studying structure-function correlations in atrophic AMD and could have further applications in other retinal pathologies.
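The core step described in this abstract (convolving the en face GCL+IPL thickness projection with a 2D Gaussian and taking the global maximum) can be sketched as follows; the function name and the sigma value are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_fovea(thickness_map: np.ndarray, sigma: float = 5.0) -> tuple:
    """Locate the fovea on an en face GCL+IPL thickness projection.

    Following the abstract's description: smooth the 2D thickness map
    with a Gaussian filter and return the (row, col) index of the
    global maximum of the smoothed map.
    """
    smoothed = gaussian_filter(thickness_map.astype(float), sigma=sigma)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)
```

A usage sketch: given a 2D array `thickness_map` of per-pixel GCL+IPL thickness, `detect_fovea(thickness_map)` returns the en face pixel coordinates of the detected fovea, which can then be converted to millimetres using the scan's pixel spacing.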

    Landmark detection for fusion of fundus and MRI toward a patient-specific multimodal eye model.

    Ophthalmologists typically acquire different image modalities to diagnose eye pathologies, e.g., fundus photography, optical coherence tomography, computed tomography, and magnetic resonance imaging (MRI). These images are often complementary and express the same pathologies in different ways, and some pathologies are visible only in a particular modality. It is therefore beneficial for the ophthalmologist to have these modalities fused into a single patient-specific model. The goal of this paper is the fusion of fundus photography with segmented MRI volumes. This adds information to the MRI, such as vessels and the macula, that was not visible before. The contributions of this paper include automatic detection of the optic disc, the fovea, and the optic axis, and an automatic segmentation of the vitreous humor of the eye.